Speedup of Distributed Programs on a Network of Shared Processors

Author

  • Sung Hyun Cho
Abstract

This paper presents an alternative to process migration, called competition, to speed up distributed programs in the background on a network of processors. Competition protocols are transparent operating system facilities that involve creating multiple instances (called clones) p1, p2, etc. of a process P on different processors, and making the clones "compete", i.e., attempting to guarantee that the output of the clone that is farthest "ahead" is fed to the rest of the computation, and that the entire application's performance tracks that of the clone which is farthest ahead. We show that competition protocols offer performance benefits that are as good as or better than migration protocols for distributed programs under comparable assumptions. We also demonstrate that competition protocols offer even more speedup for distributed programs than for sequential programs.
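
The abstract describes the competition protocol only in prose. The Go sketch below is one way to picture the core idea under stated assumptions: several clones of the same work run concurrently, and whichever clone produces its output first is the one the rest of the computation uses. The names runClone and numClones, and the use of a random sleep to model differing background load, are illustrative assumptions, not details taken from the paper.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// runClone stands in for one clone p_i of process P running on its own
// processor; the random sleep models differing background load on that host.
func runClone(id int, results chan<- string) {
	time.Sleep(time.Duration(rand.Intn(500)) * time.Millisecond)
	results <- fmt.Sprintf("output of clone %d", id)
}

func main() {
	const numClones = 3
	results := make(chan string, numClones)

	// Clones of the same computation compete in the background.
	for i := 1; i <= numClones; i++ {
		go runClone(i, results)
	}

	// The first result to arrive comes from the clone that is farthest ahead;
	// it is fed to the rest of the computation and the others are ignored.
	fmt.Println("using:", <-results)
}
```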


Similar articles

Speeding up the Stress Analysis of Hollow Circular FGM Cylinders by Parallel Finite Element Method

In this article, a parallel computer program is implemented, based on the Finite Element Method, to speed up the analysis of hollow circular cylinders made from Functionally Graded Materials (FGMs). FGMs are inhomogeneous materials whose composition varies gradually over the volume. In parallel processing, an algorithm is first divided into independent tasks, which may use individual or shared da...

Architecture-Independent Parallelism for Both Shared- and Distributed-Memory Machines Using the Filaments Package

This paper presents the Filaments package, which can be used to create architecture-independent parallel programs, that is, programs that are portable and efficient across vastly different parallel machines. Filaments virtualizes the underlying machine in terms of the number of processors and the interconnection. This simplifies parallel program design in two ways. First, programs can be written (or...

Concord: Re-Thinking the Division of Labor in a Distributed Shared Memory System

A distributed shared memory system provides the abstraction of a shared address space on either a network of workstations or a distributed-memory multiprocessor. Although a distributed shared memory system can improve performance by relaxing the memory consistency model and maintaining memory coherence at a granularity specified by the programmer, the challenge is to offer ease of programming whi...

Performance Evaluation of Parallel Sorting in Shared and Distributed Memory Systems

In this paper, we investigate the performance of two different parallel sorting algorithms (PSRS and BRADIX) in both distributed and shared memory systems. These two algorithms are technically very different from each other. PSRS is a computation-intensive algorithm, whereas BRADIX relies mainly on communication among processors to sort data items. We observe that a communication-intensive algor...

An Adaptive Distributed Algorithm for Sequential Circuit Test

We describe the parallelization of sequential circuit test generation on an Ethernet-connected network of SUN workstations. We use the observations of the previous work to execute the program in two phases. All processors simultaneously run the test generation program, Gentest. In the first phase, the fault list is equally divided among processors, each of which derives tests for targets from its...
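
The first-phase division of the fault list described above can be pictured with a small sketch. The helper partition, the example fault names, and the processor count nprocs are illustrative assumptions, not taken from the paper, and the round-robin assignment is just one simple way to split the list evenly.

```go
package main

import "fmt"

// partition splits the fault list into n nearly equal chunks by assigning
// faults to processors in round-robin order.
func partition(faults []string, n int) [][]string {
	chunks := make([][]string, n)
	for i, f := range faults {
		chunks[i%n] = append(chunks[i%n], f)
	}
	return chunks
}

func main() {
	faults := []string{"f1", "f2", "f3", "f4", "f5", "f6", "f7"}
	nprocs := 3

	// Each processor would then run the test generator on its own targets.
	for p, targets := range partition(faults, nprocs) {
		fmt.Printf("processor %d targets %v\n", p, targets)
	}
}
```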


Journal:

Volume  Issue

Pages  -

Publication date: 1996